Accurate airway extraction from computed tomography (CT) images is a critical step for planning navigation bronchoscopy and for quantitative assessment of airway-related chronic obstructive pulmonary disease (COPD). Existing methods struggle to segment the airway sufficiently, especially the high-generation airways, under the constraint of limited labels, and therefore cannot meet clinical needs in COPD. We propose a novel two-stage 3D contextual transformer-based U-Net for airway segmentation from CT images. The method consists of two stages that perform initial and refined airway segmentation, respectively. The two stages share the same subnetwork but take different airway masks as input. A contextual transformer block is applied in both the encoder and decoder paths of the subnetwork to achieve high-quality airway segmentation effectively. In the first stage, the whole airway mask and the CT images are provided to the subnetwork; in the second stage, the intrapulmonary airway mask and the corresponding CT scans are provided. The predictions of the two stages are then merged as the final prediction. Extensive experiments were performed on an in-house dataset and multiple public datasets. Quantitative and qualitative analyses demonstrate that our proposed method extracts considerably more branches and greater tree length while achieving state-of-the-art airway segmentation performance. The code is available at https://github.com/zhaozsq/airway_segmentation.
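As a rough illustration of the contextual transformer block mentioned above, here is a minimal 3D sketch in PyTorch; the structure (a 3×3×3 grouped convolution for static context plus attention-weighted values for dynamic context) follows the common CoT design, and all names and hyperparameters are assumptions rather than the authors' released code.

```python
import torch
import torch.nn as nn

class CoTBlock3D(nn.Module):
    """Minimal 3D contextual-transformer-style block (a sketch, not the authors' code).

    Static context: 3x3x3 grouped convolution over the input.
    Dynamic context: per-voxel attention weights predicted from the concatenation
    of static context and the input, applied to the value projection.
    """

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        self.key_embed = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size, padding=pad, groups=4, bias=False),
            nn.InstanceNorm3d(channels),
            nn.ReLU(inplace=True),
        )
        self.value_embed = nn.Conv3d(channels, channels, 1, bias=False)
        self.attention = nn.Sequential(
            nn.Conv3d(2 * channels, channels, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        static_ctx = self.key_embed(x)                      # local (static) context
        value = self.value_embed(x)
        attn = self.attention(torch.cat([static_ctx, x], dim=1))
        dynamic_ctx = torch.sigmoid(attn) * value           # attention-weighted values
        return static_ctx + dynamic_ctx                     # fuse static and dynamic context


if __name__ == "__main__":
    block = CoTBlock3D(channels=16)
    patch = torch.randn(1, 16, 32, 64, 64)                  # (B, C, D, H, W) CT feature patch
    print(block(patch).shape)                                # torch.Size([1, 16, 32, 64, 64])
```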
Existing advertisement click-through rate (CTR) prediction models mainly rely on behavior ID features, which are learned from historical user-ad interactions. However, behavior ID features that depend on historical user behaviors cannot describe new ads that have no prior interactions with users. To overcome this limitation of behavior ID features for modeling new ads, we exploit the visual content in ads to improve the performance of CTR prediction models. Specifically, we map each ad into a set of visual IDs based on its visual content. These visual IDs are further used to generate visual embeddings that enhance the CTR prediction model. We formulate the learning of visual IDs as a supervised quantization problem. Since class labels are unavailable for the commercial images in ads, we exploit image-text descriptions as supervision to optimize the image feature extractor for generating effective visual IDs. Meanwhile, since hard quantization is non-differentiable, we soften the quantization operation so that it supports end-to-end network training. After mapping each image into its visual IDs, we learn an embedding for each visual ID based on the historical user-ad interactions accumulated in the past. Because a visual ID embedding depends only on visual content, it generalizes to new ads. Meanwhile, the visual ID embeddings complement the behavior ID embeddings. Hence, they can considerably improve the performance of CTR prediction models that previously relied on behavior ID features, for both new ads and ads that have accumulated rich user behaviors. After incorporating the visual ID embeddings into the CTR prediction model of Baidu online advertising, the average CTR of ads improved by 1.46% and the total charge increased by 1.10%.
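To make the softened quantization step concrete, here is a small hedged sketch of mapping an image embedding to visual IDs; the codebook size, temperature, and the softmax-over-distances form are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftVisualQuantizer(nn.Module):
    """Sketch of softened quantization from an image embedding to visual IDs."""

    def __init__(self, embed_dim: int = 128, num_ids: int = 1024, temperature: float = 0.1):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(num_ids, embed_dim))
        self.temperature = temperature

    def forward(self, image_embed: torch.Tensor):
        # Negative squared distances to every codeword -> soft assignment weights.
        dist = torch.cdist(image_embed, self.codebook) ** 2
        soft_assign = F.softmax(-dist / self.temperature, dim=-1)   # differentiable
        visual_id = soft_assign.argmax(dim=-1)                      # hard ID used at serving time
        # The soft embedding keeps the whole pipeline end-to-end trainable.
        soft_embed = soft_assign @ self.codebook
        return visual_id, soft_embed


if __name__ == "__main__":
    quantizer = SoftVisualQuantizer()
    feats = torch.randn(4, 128)           # image features from a visual encoder
    ids, embeds = quantizer(feats)
    print(ids.shape, embeds.shape)        # torch.Size([4]) torch.Size([4, 128])
```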
Advances in communication technology and the popularity of smartphones have fueled the boom of video advertising. Baidu, one of the world's leading search-engine companies, receives billions of search queries per day. How to pair video ads with user searches is a core task of Baidu video advertising. Owing to the modality gap, query-to-video retrieval is more challenging than traditional query-to-document retrieval and image-to-image search. Traditionally, query-to-video retrieval is tackled through query-to-title retrieval, which is unreliable when the quality of the titles is low. With the rapid progress of computer vision and natural language processing in recent years, content-based search methods have become promising for query-to-video retrieval. Benefiting from pretraining on large-scale datasets, some VisionBERT-style methods based on cross-modal attention have achieved excellent performance on many vision-language tasks, not only in academia but also in industry. However, the expensive computational cost of cross-modal attention makes it impractical for large-scale search in industrial applications. In this work, we present a tree-based combo-attention network (TCAN), which was recently launched on Baidu's dynamic video advertising platform. It provides a practical solution for deploying heavy cross-modal attention in large-scale query-to-video search. After launching the tree-based combo-attention network, the click-through rate improved by 2.29% and the conversion rate improved by 2.63%.
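To illustrate why a tree structure makes heavy cross-modal attention affordable at scale, here is a hedged coarse-to-fine retrieval sketch: a cluster tree prunes the candidate set, and the expensive cross-modal scorer runs only on the surviving leaves. The tree layout, beam size, and dot-product stand-in scorer are illustrative assumptions, not TCAN's actual components.

```python
import numpy as np

def retrieve_videos(query_vec, root, cross_modal_score, beam=2, top_k=3):
    """Coarse-to-fine retrieval sketch: prune with a cluster tree, then apply the
    expensive cross-modal scorer only to the surviving leaf videos."""
    frontier = [root]
    while any(node["children"] for node in frontier):
        children = [c for node in frontier for c in (node["children"] or [node])]
        children.sort(key=lambda n: -float(query_vec @ n["centroid"]))
        frontier = children[:beam]                     # keep only the closest clusters
    candidates = [v for node in frontier for v in node["videos"]]
    return sorted(candidates, key=lambda v: -cross_modal_score(query_vec, v))[:top_k]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    leaves = [{"centroid": rng.normal(size=8), "children": [],
               "videos": [(f"video_{i}_{j}", rng.normal(size=8)) for j in range(5)]}
              for i in range(4)]
    root = {"centroid": np.zeros(8), "children": leaves, "videos": []}
    query = rng.normal(size=8)
    # Stand-in scorer: a dot product; in production this is the heavy cross-modal model.
    hits = retrieve_videos(query, root, lambda q, v: float(q @ v[1]))
    print([name for name, _ in hits])
```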
In recent years, blind image quality assessment (BIQA) has achieved great success in various task-specific scenarios that present invariant distortion types and evaluation criteria. However, because of their rigid structures and learning frameworks, these methods cannot be applied to cross-task BIQA scenarios, in which the distortion types and evaluation criteria keep changing in practical applications. This paper proposes a scalable incremental learning framework (SILF) that can sequentially perform BIQA across multiple evaluation tasks with limited memory capacity. More specifically, we develop a dynamic parameter isolation strategy to sequentially update task-specific parameter subsets that do not overlap with one another. Each parameter subset is temporarily fixed to memorize one evaluation preference for its corresponding task, and previously fixed parameter subsets can be adaptively reused in subsequent BIQA tasks to achieve better performance according to task relevance. To suppress the unrestrained expansion of memory capacity during sequential task learning, we develop a scalable memory unit that gradually and selectively prunes unimportant neurons from previously settled parameter subsets, which enables us to forget part of the previous experience and frees limited memory capacity for adapting to new tasks. Extensive experiments on eleven IQA datasets demonstrate that our proposed method significantly outperforms other state-of-the-art methods in cross-task BIQA.
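To make the parameter-isolation idea tangible, here is a hedged sketch of a linear layer with per-task, non-overlapping weight masks and simple magnitude-based pruning of each new subset; the selection fractions, importance scores, and pruning rule are illustrative assumptions, not SILF's actual algorithm.

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Sketch of dynamic parameter isolation with per-task binary masks."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.01)
        self.free = torch.ones_like(self.weight, dtype=torch.bool)   # still trainable
        self.task_masks = {}                                         # task_id -> frozen subset

    def allocate_task(self, task_id: str, fraction: float = 0.3, prune: float = 0.2):
        """Freeze the most important free weights for this task, pruning the weakest."""
        scores = self.weight.abs() * self.free                       # importance of free weights
        k = int(fraction * self.free.sum())
        idx = torch.topk(scores.flatten(), k).indices
        mask = torch.zeros_like(self.free).flatten()
        mask[idx] = True
        mask = mask.view_as(self.free)
        # Selectively drop the least important part of the new subset to save capacity.
        keep = self.weight.abs().flatten() >= torch.quantile(self.weight.abs()[mask], prune)
        mask &= keep.view_as(mask)
        self.task_masks[task_id] = mask
        self.free &= ~mask                                            # subsets never overlap

    def forward(self, x: torch.Tensor, task_id: str) -> torch.Tensor:
        # A task reuses its own frozen subset plus any still-free weights.
        active = self.task_masks[task_id] | self.free
        return x @ (self.weight * active).t()


if __name__ == "__main__":
    layer = MaskedLinear(16, 8)
    layer.allocate_task("synthetic_iqa")
    layer.allocate_task("authentic_iqa")
    print(layer(torch.randn(2, 16), "synthetic_iqa").shape)           # torch.Size([2, 8])
```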
For years, the YOLO series has been the de facto industry-level standard for efficient object detection. The YOLO community has prospered overwhelmingly, enriching its use on numerous hardware platforms and in abundant scenarios. In this technical report, we strive to push its limits to a new level, moving forward with an unwavering mindset for industrial application. Considering the diverse requirements for speed and accuracy in real environments, we extensively examine up-to-date object detection advancements from industry and academia. Specifically, we heavily assimilate ideas from recent network design, training strategies, testing techniques, quantization, and optimization methods. On top of this, we integrate these ideas and practices to build a suite of deployment-ready networks at various scales to accommodate diversified use cases. With the generous permission of the YOLO authors, we name it YOLOv6. We also extend a warm welcome to users and contributors for further enhancements. For a glimpse of performance, our YOLOv6-N hits 35.9% AP on the COCO dataset at a throughput of 1234 FPS on an NVIDIA Tesla T4 GPU. YOLOv6-S strikes 43.5% AP at 495 FPS, outperforming other mainstream detectors at the same scale (YOLOv5-S, YOLOX-S, and PPYOLOE-S). Our quantized version of YOLOv6-S even brings a new 43.3% AP at 869 FPS. Furthermore, YOLOv6-M/L achieves better accuracy (i.e., 49.5%/52.3%) than other detectors with similar inference speed. We carefully conducted experiments to validate the effectiveness of each component. Our code is available at https://github.com/meituan/yolov6.
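The speed figures quoted above are throughput numbers on a T4 GPU; a generic sketch of how such a measurement is typically taken in PyTorch is shown below. The toy model, input shape, and iteration counts are placeholders, not the YOLOv6 benchmark protocol (which also fixes runtimes such as TensorRT, batch size, and precision).

```python
import time
import torch

def measure_fps(model, input_shape=(1, 3, 640, 640), warmup=50, iters=200):
    """Rough GPU throughput measurement (images per second) for a detector."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.eval().to(device)
    dummy = torch.randn(*input_shape, device=device)
    with torch.no_grad():
        for _ in range(warmup):                 # warm up kernels / cudnn autotuning
            model(dummy)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(dummy)
        if device == "cuda":
            torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    return iters * input_shape[0] / elapsed


if __name__ == "__main__":
    # Placeholder network; swap in a real detector checkpoint to benchmark it.
    toy = torch.nn.Sequential(torch.nn.Conv2d(3, 16, 3, 2, 1), torch.nn.ReLU(),
                              torch.nn.Conv2d(16, 32, 3, 2, 1))
    print(f"{measure_fps(toy):.1f} img/s")
```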
Gastrointestinal endoscopic surgery (GES) places high demands on instrument size and distal dexterity, because the endoscope channel is narrow and the human gastrointestinal tract is tortuous. This paper utilizes nickel-titanium (NiTi) wires to develop a miniature 3-DOF (pitch-translation) flexible parallel robotic wrist (FPRW). In addition, we assembled an electric knife on the wrist's connection interface and then tele-operated it to perform endoscopic submucosal dissection (ESD) in a porcine stomach. The effective performance in each ESD workflow demonstrates that the designed FPRW has a sufficient workspace, high distal dexterity, and high positioning accuracy.
Crowd understanding has aroused widespread interest in the vision domain due to its important practical significance. Unfortunately, there has been no effort to explore crowd understanding in the multi-modal domain that bridges natural language and computer vision. Referring expression comprehension (REF) is a representative multi-modal task. Current REF studies focus more on grounding the target object from among multiple distinctive categories in general scenarios, which is difficult to apply to complex real-world crowd understanding. To fill this gap, we propose a new challenging dataset, called RefCrowd, which aims at locating the target person in a crowd through referring expressions. It requires not only sufficiently mining natural language information but also carefully attending to the subtle differences between the target and a crowd of people with similar appearance, so as to achieve fine-grained mapping from language to vision. Furthermore, we propose a Fine-grained Multi-modal Attribute Contrastive Network (FMAC) to handle REF in crowd understanding. It first decomposes the intricate visual and language features into attribute-aware multi-modal features, and then captures discriminative yet robust fine-grained attribute features to effectively distinguish these subtle differences between similar persons. The proposed method outperforms existing state-of-the-art (SoTA) methods on our RefCrowd dataset and on existing referring expression datasets. In addition, we implement an end-to-end REF toolbox for deeper research in the multi-modal domain. Our dataset and code are available at: https://qiuheqian.github.io/datasets/refcrowd/.
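To illustrate what an attribute-level contrastive objective for this setting could look like, here is a hedged sketch: each language attribute feature is pulled toward the referred person's matching visual attribute and pushed away from the same attribute of other persons in the crowd. The shapes, attribute decomposition, and InfoNCE-style loss are illustrative assumptions, not FMAC's actual formulation.

```python
import torch
import torch.nn.functional as F

def attribute_contrastive_loss(lang_attr, target_attr, crowd_attr, tau=0.07):
    """Sketch of an attribute-level contrastive objective for crowd REF.

    lang_attr:   (A, D)    language features, one per attribute (e.g. clothing, pose)
    target_attr: (A, D)    visual attribute features of the referred person
    crowd_attr:  (N, A, D) visual attribute features of the other persons
    """
    lang = F.normalize(lang_attr, dim=-1)
    pos = F.normalize(target_attr, dim=-1)
    neg = F.normalize(crowd_attr, dim=-1)
    pos_logit = (lang * pos).sum(-1, keepdim=True) / tau             # (A, 1)
    neg_logit = torch.einsum("ad,nad->an", lang, neg) / tau          # (A, N)
    logits = torch.cat([pos_logit, neg_logit], dim=1)                # the target is class 0
    labels = torch.zeros(logits.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    loss = attribute_contrastive_loss(torch.randn(4, 256), torch.randn(4, 256),
                                      torch.randn(7, 4, 256))
    print(float(loss))
```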
We present an efficient Neural Neighborhood Search (N2S) approach for pickup and delivery problems (PDPs). Specifically, we design a powerful Synthesis Attention that allows the vanilla self-attention to synthesize various features of a route solution. We also exploit two customized decoders that automatically learn to perform the removal and reinsertion of pickup-delivery node pairs in order to tackle the precedence constraint. Additionally, a diversity enhancement scheme is leveraged to further improve performance. Our N2S is generic, and extensive experiments on two canonical PDP variants show that it produces state-of-the-art results among existing neural methods. Moreover, it even outperforms the well-known LKH3 solver on the more constrained PDP variant. Our implementation of N2S is available online.
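For intuition, here is a plain heuristic sketch of the removal-and-reinsertion move on a single pickup-and-delivery route: a pair is removed and reinserted at the best positions that respect the precedence constraint (pickup before delivery). In N2S the two learned decoders choose the pair and the insertion positions; the random/greedy choices below are stand-ins.

```python
import itertools
import random

def route_cost(route, dist):
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

def remove_reinsert(route, pairs, dist):
    """One removal-and-reinsertion move for a pickup-and-delivery route.

    `route` starts and ends at the depot (node 0); `pairs` maps pickup -> delivery.
    """
    pickup = random.choice(list(pairs))
    delivery = pairs[pickup]
    partial = [n for n in route if n not in (pickup, delivery)]
    best = None
    # Enumerate reinsertion positions with the precedence constraint: pickup first.
    for i, j in itertools.combinations(range(1, len(partial) + 1), 2):
        cand = partial[:i] + [pickup] + partial[i:j - 1] + [delivery] + partial[j - 1:]
        cost = route_cost(cand, dist)
        if best is None or cost < best[1]:
            best = (cand, cost)
    return best


if __name__ == "__main__":
    random.seed(0)
    # Toy symmetric distance matrix over depot 0 and nodes 1..4 (pairs: 1->2, 3->4).
    dist = [[0, 2, 4, 3, 6], [2, 0, 3, 5, 4], [4, 3, 0, 2, 3],
            [3, 5, 2, 0, 2], [6, 4, 3, 2, 0]]
    new_route, cost = remove_reinsert([0, 3, 1, 4, 2, 0], {1: 2, 3: 4}, dist)
    print(new_route, cost)
```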
Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps or events, leaving out fine-grained interaction details about the surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as triplets of <instrument, verb, target> combination delivers comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms from competing teams are presented, recognizing surgical action triplets directly from surgical videos and achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison and in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved, and also highlights interesting directions for future research on fine-grained surgical activity recognition which is of utmost importance for the development of AI in surgery.
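As a rough illustration of the mAP metric and of a simple ensemble over the submitted models, here is a hedged sketch that averages per-triplet probabilities across models and computes mean average precision with scikit-learn; this is not the challenge's official evaluator (ivtmetrics), and the label/probability shapes are assumptions.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def triplet_map(y_true, model_probs):
    """Sketch of mAP over <instrument, verb, target> triplet classes with a
    probability-averaging ensemble.

    y_true:      (frames, triplet_classes) binary labels
    model_probs: list of (frames, triplet_classes) per-model probabilities
    """
    probs = np.mean(model_probs, axis=0)                # naive ensemble: average probabilities
    aps = []
    for c in range(y_true.shape[1]):
        if y_true[:, c].any():                          # AP is undefined for absent classes
            aps.append(average_precision_score(y_true[:, c], probs[:, c]))
    return float(np.mean(aps))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = (rng.random((200, 10)) < 0.1).astype(int)  # toy frame-level triplet labels
    models = [np.clip(labels + rng.normal(0, 0.4, labels.shape), 0, 1) for _ in range(3)]
    print(f"ensemble mAP: {triplet_map(labels, models):.3f}")
```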
To generate high-quality rendered images for real-time applications, it is common to trace only a few samples per pixel (spp) at a lower resolution and then supersample to the high resolution. Based on the observation that pixels rendered at a low resolution are typically highly aliased, we present a novel method for neural supersampling based on ray tracing 1/4-spp samples at the high resolution. Our key insight is that the ray-traced samples at the target resolution are accurate and reliable, which makes supersampling an interpolation problem. We present a mask-reinforced neural network to reconstruct and interpolate high-quality image sequences. First, a novel temporal accumulation network is introduced to compute the correlation between current and previous features, significantly improving their temporal stability. Then a reconstruction network based on a multi-scale U-Net with skip connections is adopted for reconstruction and generation of the desired high-resolution image. Experimental results and comparisons show that our proposed method generates higher-quality supersampling results than current state-of-the-art methods, without increasing the total number of ray-tracing samples.
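To convey the general mechanism behind temporal accumulation, here is a hedged PyTorch sketch: the previous frame's features are warped with screen-space motion vectors, a per-pixel correlation is computed against the current features, and a small network predicts a blend weight. The warping scheme, correlation, and blend network are illustrative assumptions, not the paper's temporal accumulation network.

```python
import torch
import torch.nn.functional as F

def temporal_accumulate(curr_feat, prev_feat, motion, blend_net):
    """Sketch of temporal accumulation for neural supersampling.

    curr_feat, prev_feat: (B, C, H, W) feature maps
    motion:               (B, 2, H, W) screen-space motion vectors in pixels
    blend_net:            small CNN mapping (B, 2C+1, H, W) -> (B, 1, H, W)
    """
    b, _, h, w = curr_feat.shape
    # Build a sampling grid that follows the motion vectors back to the previous frame.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack([xs, ys], dim=0).float().unsqueeze(0).to(curr_feat)
    src = base - motion
    grid = torch.stack([2 * src[:, 0] / (w - 1) - 1,                 # normalize to [-1, 1]
                        2 * src[:, 1] / (h - 1) - 1], dim=-1)
    warped = F.grid_sample(prev_feat, grid, align_corners=True)
    corr = (curr_feat * warped).sum(1, keepdim=True)                 # per-pixel correlation
    alpha = torch.sigmoid(blend_net(torch.cat([curr_feat, warped, corr], dim=1)))
    return alpha * warped + (1 - alpha) * curr_feat                  # accumulated features


if __name__ == "__main__":
    net = torch.nn.Conv2d(2 * 8 + 1, 1, 3, padding=1)
    out = temporal_accumulate(torch.randn(1, 8, 64, 64), torch.randn(1, 8, 64, 64),
                              torch.zeros(1, 2, 64, 64), net)
    print(out.shape)                                                  # torch.Size([1, 8, 64, 64])
```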